Vision-based MAV Navigation: Implementation Challenges Towards a Usable System in Real-Life Scenarios
Abstract
In this workshop, we share our experience on autonomous flights with a MAV using a monocular camera as the only exteroceptive sensor. We developed a flexible and modular framework for real-time onboard MAV navigation fusing visual data with IMU measurements at a rate of 1 kHz. We aim to provide detailed insight into how our modules work and how they are used in our system. Using this framework, we achieved unprecedented MAV navigation autonomy, with flights of more than 350 m length at altitudes between 0 m and 70 m, in previously unknown areas and without the use of global signals such as GPS.

The great advances in Micro Aerial Vehicle (MAV) technology, with platforms of about 1.5 kg, have sparked great research interest with the prospect of employing them in search and rescue or inspection scenarios. As a result, we have seen some truly impressive systems showcasing aggressive flight maneuvers such as multiple flips [3] and docking to an oblique wall [4]. It is worth noting, however, that all these systems can only operate within the coverage area of sophisticated external tracking systems (e.g. Vicon). It is only very recently that methods without such constraints started emerging, performing real-time, onboard MAV state estimation and pushing towards deployable MAVs in general environments. Given the computational constraints, most MAV navigation systems still struggle with long-term flight autonomy in large and unknown (outdoor) environments where global positioning signals are unreliable or unavailable, limiting the autonomy of MAVs and thus their usability in real scenarios.

Addressing this demand, we developed a powerful framework achieving unprecedented MAV navigation autonomy, with flights of more than 350 m length at altitudes between 0 m and 70 m, in previously unknown areas and without the use of global signals such as GPS. In this workshop, we share our experience on the development of our framework for real-time onboard MAV navigation, with a particular focus on the technical challenges we have been facing. With the most important parts of this framework just made available to the wider Robotics community, we aim to provide detailed insight into how these modules work and how they are used in our system.

We choose a minimal sensor suite onboard our MAV, consisting of a monocular camera for sensing the scene and an Inertial Measurement Unit (IMU) providing estimates of the MAV acceleration and angular velocity. While this choice implies low power consumption and wide applicability of the algorithms (e.g. the same sensor suite exists in smart phones), the challenges within the estimation processes are great.

Fig. 1. The MAV operating in a disaster area, using only measurements from a monocular camera and IMU for localization.

Drawing inspiration from the latest advances in monocular Simultaneous Localization And Mapping (SLAM), we develop a visual odometry algorithm [6, 8] able to provide ego-motion and scene estimation at constant complexity, which is key for autonomous long-term missions. As the visual pose estimates are prone to drift and can recover position only up to a relative scale, we take advantage of the complementary nature of visual and inertial cues to fuse the unsynchronized sensor readings within an Extended Kalman Filter (EKF) framework [7] for robust MAV state estimation. The 6 DoF MAV pose and velocity are estimated, as well as the visual scale and the intra- and inter-sensor calibration parameters between the camera and the IMU.
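To make this fusion idea more concrete, below is a minimal sketch of a loosely-coupled visual-inertial EKF that carries the visual scale as a filter state. It is not the ethzasl_sensor_fusion implementation: attitude, IMU biases and the camera-IMU extrinsics estimated by the real system are omitted, accelerations are assumed gravity-compensated and already expressed in the world frame, and all names and noise values (ScaleAwareEkf, acc_noise, meas_noise) are illustrative.

// Minimal sketch of a loosely-coupled visual-inertial EKF with the visual
// scale as a filter state. State x = [p (3), v (3), lambda (1)]: position,
// velocity, scale. Attitude, IMU biases and the camera-IMU extrinsics of the
// real system are omitted; accelerations are assumed gravity-compensated and
// expressed in the world frame.
#include <Eigen/Dense>

struct ScaleAwareEkf {
  Eigen::Matrix<double, 7, 1> x = Eigen::Matrix<double, 7, 1>::Zero();
  Eigen::Matrix<double, 7, 7> P = Eigen::Matrix<double, 7, 7>::Identity();

  ScaleAwareEkf() { x(6) = 1.0; }  // start with unit visual scale

  // IMU propagation, called at the IMU rate (e.g. 1 kHz).
  void predict(const Eigen::Vector3d& acc_world, double dt,
               double acc_noise = 0.2) {
    const Eigen::Vector3d p = x.segment<3>(0);
    const Eigen::Vector3d v = x.segment<3>(3);
    x.segment<3>(0) = p + v * dt + 0.5 * acc_world * dt * dt;
    x.segment<3>(3) = v + acc_world * dt;

    Eigen::Matrix<double, 7, 7> F = Eigen::Matrix<double, 7, 7>::Identity();
    F.block<3, 3>(0, 3) = Eigen::Matrix3d::Identity() * dt;  // dp/dv

    Eigen::Matrix<double, 7, 7> Q = Eigen::Matrix<double, 7, 7>::Zero();
    Q.block<3, 3>(3, 3) =
        Eigen::Matrix3d::Identity() * acc_noise * acc_noise * dt;
    P = F * P * F.transpose() + Q;
  }

  // Visual update, called whenever the (slower) visual odometry delivers a
  // position in its own, arbitrarily scaled frame: z ~ lambda * p.
  void updateVision(const Eigen::Vector3d& z_vis, double meas_noise = 0.05) {
    const double lambda = x(6);
    const Eigen::Vector3d p = x.segment<3>(0);

    Eigen::Matrix<double, 3, 7> H = Eigen::Matrix<double, 3, 7>::Zero();
    H.block<3, 3>(0, 0) = lambda * Eigen::Matrix3d::Identity();  // dz/dp
    H.block<3, 1>(0, 6) = p;                                     // dz/dlambda

    const Eigen::Matrix3d R =
        Eigen::Matrix3d::Identity() * meas_noise * meas_noise;
    const Eigen::Matrix3d S = H * P * H.transpose() + R;
    const Eigen::Matrix<double, 7, 3> K = P * H.transpose() * S.inverse();

    x += K * (z_vis - lambda * p);
    P = (Eigen::Matrix<double, 7, 7>::Identity() - K * H) * P;
  }
};

Because the scale enters the filter as a state, it is continuously re-estimated in flight, which is what allows the metrically drifting, arbitrarily scaled visual estimates to be used for metric state estimation and control.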
This fusion scheme enables automatic and online sensor calibration during the mission, rendering our system truly power-on-and-go, while long-term navigation in large areas becomes possible despite the drift in the visual estimates [5] (which most systems suffer from). Finally, with all computation running onboard, we eliminate the otherwise unrealistic reliance on a permanent communication link to a ground station, bringing MAV navigation closer to usability in real-life scenarios.

We use a prototype of the “FireFly” hexacopter from Ascending Technologies, with an IMU and software/hardware interfaces similar to the “Pelican”, but with improved vibration damping for the sensors and tolerance to the failure of one rotor. As the most time-critical processes, position control [1, 8] and state prediction run at 1 kHz on a user-programmable microcontroller directly on the flight control hardware of the MAV, enabling immediate reactions to disturbances. The remaining processes run on an onboard 1.86 GHz Core2Duo embedded computer. Our framework uses the ROS middleware and runs on a standard Ubuntu operating system, facilitating the development of new algorithms. Running at 30 Hz, the current implementation uses only 60 % of one core of the Core2Duo processor, leaving enough resources for future higher-level tasks. As a reference, the same implementation on an Atom 1.6 GHz single-core computer runs at 20 Hz using 100 % of the processing power.

Our framework has been thoroughly tested under a variety of challenging conditions, exhibiting robustness in the presence of wind gusts, strong lighting causing saturated images, and large scale changes across 0-70 m flight altitude, permitting vision-based landing (Fig. 2). Successful tracking in such scenarios demonstrates the power of our robust sensor fusion and estimation approach, as such conditions are often fatal to current SLAM systems. With our setup, we were able to perform exploration flights of more than 350 m length (Fig. 3), resulting in an overall position drift of only 1.5 m. Explicitly addressing the computational restrictions typical of MAVs, our distributed processing approach across different units ensures position control at 1 kHz and enables dynamic trajectory flights at 2 m/s track speed. Finally, the continuous online estimation of calibration and visual drift states enables long-term navigation and eliminates the need for tedious pre-mission (re-)calibration procedures. On this basis, we believe that our system constitutes a milestone for vision-based MAV navigation in large, unknown and GPS-denied environments, providing a reliable basis for further research towards complete search and rescue or inspection missions, even with multiple MAVs [2].

Fig. 2. Altitude test: At z ≈ 8 m we switch to vision-based navigation (using sensor feeds from a monocular camera and an IMU only) and fly up to 70 m without intended lateral motion. Then, we let the helicopter descend while applying lateral motion to show the maneuverability during this drastic height change. At the end of the trajectory in the graph, successful vision-based landing is shown, still using only the sensor feeds from the camera and the IMU.

Fig. 3. Trajectory flown in a disaster area. After a short initialization phase at the start, vision-based navigation (blue) was switched on for successful completion of a more than 350 m-long trajectory, until battery limitations necessitated landing. The comparison of the estimated trajectory with the GPS ground truth (red) indicates a very low position and yaw drift of our real-time, onboard visual SLAM framework.

WEBLINKS

The software used to achieve the above results has been made publicly available as ROS packages:
• http://www.asl.ethz.ch/research/software (Overview)
• http://www.ros.org/wiki/ethzasl_ptam
• http://www.ros.org/wiki/ethzasl_sensor_fusion
• http://www.ros.org/wiki/asctec_mav_framework
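To illustrate the rate split described above, the following self-contained sketch mimics a 1 kHz loop that predicts the vehicle state from (biased) IMU data and runs a simple PD position controller on it, while a vision-like position measurement corrects the drifting prediction at roughly 30 Hz. This is a toy one-dimensional simulation under assumed gains and noise-free measurements, not the asctec_mav_framework or ethzasl_sensor_fusion code; all names and constants are illustrative.

// Toy illustration of the onboard rate split: a 1 kHz loop predicting the
// state from IMU data and running position control, corrected by a ~30 Hz
// vision-like position measurement. 1-D dynamics, hypothetical gains.
#include <cstdio>

struct State { double pos = 0.0, vel = 0.0; };

// Simple PD position controller returning a commanded acceleration.
double positionControl(const State& s, double pos_ref,
                       double kp = 4.0, double kd = 3.0) {
  return kp * (pos_ref - s.pos) - kd * s.vel;
}

int main() {
  const double dt = 1.0 / 1000.0;   // 1 kHz prediction and control
  const int vision_divider = 33;    // roughly 30 Hz visual corrections
  const double acc_bias = 0.05;     // un-modelled IMU bias [m/s^2] -> drift

  State estimate;                   // state the controller acts on
  double true_pos = 0.0, true_vel = 0.0, last_z = 0.0;
  const double pos_ref = 1.0;       // move 1 m from the starting point

  for (int k = 0; k < 3000; ++k) {  // simulate 3 s of flight
    // 1 kHz: control on the predicted state, then integrate vehicle and
    // prediction; the prediction uses the biased IMU reading and drifts.
    const double acc_cmd = positionControl(estimate, pos_ref);
    true_vel += acc_cmd * dt;
    true_pos += true_vel * dt;
    estimate.vel += (acc_cmd + acc_bias) * dt;
    estimate.pos += estimate.vel * dt;

    // ~30 Hz: a visual position fix corrects the drifting prediction
    // (a simple blend here, standing in for the full EKF update).
    if (k % vision_divider == 0) {
      const double z_vis = true_pos;
      const double v_vis = (z_vis - last_z) / (vision_divider * dt);
      estimate.pos += 0.5 * (z_vis - estimate.pos);
      estimate.vel += 0.5 * (v_vis - estimate.vel);
      last_z = z_vis;
    }
  }
  std::printf("position after 3 s: %.3f m (reference %.2f m)\n",
              true_pos, pos_ref);
  return 0;
}

In the system described above, the prediction and position control run on the flight-control microcontroller while the vision pipeline and the filter update run on the onboard computer, which is what keeps the control loop at 1 kHz independently of the 30 Hz vision rate.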
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
عنوان ژورنال:
دوره شماره
صفحات -
تاریخ انتشار 2012